The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debiasing algorithms. However, promising results on existing benchmark datasets may not generalize to practical scenarios, due to the following shortcomings observed in those popular benchmarks: (1) outdated semantic feature extraction, where state-of-the-art large-scale pre-trained language models such as BERT cannot be exploited because the raw text is missing; (2) incomplete display features for in-depth study of ULTR, e.g., the displayed abstracts of documents, which are needed for analyzing click-necessary bias, are missing; (3) lack of real-world user feedback, leading to the prevalence of synthetic datasets in empirical studies. To overcome these shortcomings, we introduce the Baidu-ULTR dataset. It involves randomly sampled 1.2 billion search sessions and 7,008 expert-annotated queries, which is substantially larger than existing datasets. Baidu-ULTR provides: (1) the original semantic features and a pre-trained language model for easy usage; (2) sufficient display information, such as position, displayed height, and displayed abstract, enabling comprehensive study of different biases with advanced techniques such as causal discovery and meta-learning; and (3) rich user feedback on search result pages (SERPs), such as dwell time, allowing for user-engagement optimization and promoting the exploration of multi-task learning in ULTR. In this paper, we present the design principles of Baidu-ULTR and the performance of benchmark ULTR algorithms on this new data resource, which favor explorations of ranking for long-tail queries and of pre-training tasks for ranking. The Baidu-ULTR dataset and the corresponding baseline implementations are available at https://github.com/chuxiaokai/baidu_ultr_dataset.
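To make concrete what "debiasing" means for click data of this kind, below is a minimal sketch of an inverse-propensity-scoring (IPS) pointwise objective, a standard ULTR baseline: clicks are re-weighted by the inverse of an assumed position-based examination propensity. The propensity values and function names are illustrative assumptions, not part of the Baidu-ULTR release.

```python
# A minimal IPS-weighted click loss sketch; propensities are illustrative placeholders.
import torch
import torch.nn.functional as F

def ips_pointwise_loss(scores, clicks, positions, propensity):
    # scores: (N,) model outputs; clicks: (N,) 0/1 labels; positions: (N,) SERP ranks.
    w = 1.0 / propensity[positions]                     # inverse propensity weights per impression
    return F.binary_cross_entropy_with_logits(scores, clicks.float(), weight=w, reduction="mean")

propensity = torch.tensor([1.0, 0.8, 0.6, 0.5, 0.4])    # assumed examination probability by rank
loss = ips_pointwise_loss(torch.randn(8, requires_grad=True),
                          torch.randint(0, 2, (8,)),
                          torch.randint(0, 5, (8,)),
                          propensity)
loss.backward()
print(float(loss))
```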
Compared to human vision, computer vision based on convolutional neural networks (CNNs) is more vulnerable to adversarial noise. This difference may be attributed to how the eyes sample visual input and how the brain processes retinal samples through its dorsal and ventral visual pathways, which remain under-explored in computer vision. Inspired by the brain, we design recurrent neural networks that include an input sampler mimicking the human retina, a dorsal network that guides where to look next, and a ventral network that represents the retinal samples. Combining these modules, the models learn to take multiple glances at an image, attend to a salient part at each glance, and accumulate representations over time to recognize the image. We test the robustness of such models against varying levels of adversarial noise, with a particular focus on the effects of different input sampling strategies. Our findings suggest that retinal foveation and sampling make models more robust, and that the models can correct themselves after an attack when given more time to take additional glances at an image. In conclusion, robust visual recognition can benefit from the combined use of three brain-inspired mechanisms: retinal transformation, attention-guided eye movements, and recurrent processing, as opposed to feedforward-only CNNs.
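A minimal sketch of the glance-and-accumulate loop described above, in PyTorch: a crude "retina" crops a patch around the current fixation, a ventral branch encodes it, a recurrent cell accumulates evidence, and a dorsal branch predicts the next fixation. Module names, the cropping scheme, and the number of glances are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a recurrent glimpse model; all sizes and module choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlimpseRecognizer(nn.Module):
    def __init__(self, patch=32, feat=128, classes=10):
        super().__init__()
        self.patch, self.feat = patch, feat
        self.ventral = nn.Sequential(                       # encodes each retinal sample
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))
        self.rnn = nn.GRUCell(feat, feat)                   # accumulates evidence over glances
        self.dorsal = nn.Linear(feat, 2)                    # predicts the next fixation (x, y) in [-1, 1]
        self.head = nn.Linear(feat, classes)

    def crop(self, img, fix):
        # Sample a small patch centered at the fixation via grid_sample (a crude "retina").
        B, _, H, W = img.shape
        lin = torch.linspace(-1, 1, self.patch, device=img.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).repeat(B, 1, 1, 1)
        grid = grid * (self.patch / max(H, W)) + fix.view(B, 1, 1, 2)
        return F.grid_sample(img, grid, align_corners=False)

    def forward(self, img, glances=4):
        B = img.size(0)
        h = img.new_zeros(B, self.feat)
        fix = img.new_zeros(B, 2)                           # start looking at the image center
        for _ in range(glances):
            z = self.ventral(self.crop(img, fix))
            h = self.rnn(z, h)
            fix = torch.tanh(self.dorsal(h))                # dorsal branch picks where to look next
        return self.head(h)

logits = GlimpseRecognizer()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```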
Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their outputs remains elusive. Structured explanations, called entailment trees, have recently been proposed as a way to explain and inspect a QA system's answers. To better generate such trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. In contrast to previous approaches, our method combines generation steps and retrieval of premises, allowing the model to leverage intermediate conclusions and mitigating the input size limitations of baseline encoder models. We conduct experiments using the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
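The iterative retrieve-then-generate loop can be summarized in a few lines; the sketch below treats the retriever and the step generator as black boxes. retrieve_premises and generate_step are hypothetical stand-ins, not the released IRGR API.

```python
# Sketch of an iterative retrieval-generation loop for entailment trees.
def build_entailment_tree(hypothesis, corpus, retrieve_premises, generate_step, max_steps=10):
    premises = []          # retrieved sentences plus intermediate conclusions
    tree = []              # list of (supporting premises, conclusion) steps
    for _ in range(max_steps):
        # Retrieve candidate premises conditioned on the hypothesis and what we have so far.
        premises += retrieve_premises(hypothesis, premises, corpus)
        # Generate a single entailment step from the current premise pool.
        support, conclusion = generate_step(hypothesis, premises)
        tree.append((support, conclusion))
        if conclusion == hypothesis:       # stop once the hypothesis itself is derived
            break
        premises.append(conclusion)        # intermediate conclusions become usable premises
    return tree

# Toy usage with stub components standing in for the retriever and generator.
corpus = ["plants need light", "light comes from the sun"]
tree = build_entailment_tree(
    "plants need the sun",
    corpus,
    retrieve_premises=lambda h, p, c: [s for s in c if s not in p],
    generate_step=lambda h, p: (p[-2:], h if len(p) >= 2 else p[-1]),
)
print(tree)
```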
Imbalanced music genre classification is a crucial task in the Music Information Retrieval (MIR) field for identifying long-tailed, data-poor genres from related music audio segments, a setting that is very prevalent in real-world scenarios. Most existing models are designed for class-balanced music datasets and perform poorly in accuracy and generalization when identifying genres at the tail of the distribution. Inspired by the success of introducing Multi-Instance Learning (MIL) into various classification tasks, we propose a novel mechanism named Multi-instance Attention (MATT) to boost performance on tail classes. Specifically, we first construct bag-level datasets by generating album-artist pair bags. Second, we leverage neural networks to encode the music audio segments. Finally, under the guidance of the multi-instance attention mechanism, the neural-network-based models learn to select the most informative genre to match a given music segment. Comprehensive experimental results on a large-scale music genre benchmark dataset with a long-tail distribution demonstrate that MATT significantly outperforms other state-of-the-art baselines.
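As a rough illustration of multi-instance attention pooling over a bag of encoded audio segments, the sketch below weights each segment with a learned attention score and classifies the weighted bag representation. The layer sizes and the aggregation scheme are assumptions; MATT's precise formulation may differ.

```python
# Sketch of attention-based multi-instance pooling for bag-level genre classification.
import torch
import torch.nn as nn

class BagAttentionClassifier(nn.Module):
    def __init__(self, dim=128, hidden=64, genres=20):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(dim, genres)

    def forward(self, bag):                         # bag: (num_segments, dim) encoded audio segments
        w = torch.softmax(self.attn(bag), dim=0)    # attention weight per segment
        z = (w * bag).sum(dim=0)                    # weighted bag representation
        return self.head(z), w.squeeze(-1)

segments = torch.randn(12, 128)                     # 12 encoded segments from one album-artist bag
logits, weights = BagAttentionClassifier()(segments)
print(logits.shape, weights.shape)                  # torch.Size([20]) torch.Size([12])
```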
Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must handle data distributions that deviate from what the PTLM was originally trained on. In this paper, we study a lifelong language model pretraining challenge in which a PTLM is continually updated to adapt to emerging data. Over a domain-incremental stream of research papers and a chronologically ordered stream of tweets, we incrementally pretrain a PTLM with different continual learning algorithms and track the downstream task performance (after fine-tuning). We evaluate the PTLM's ability to adapt to new corpora while retaining knowledge learned from earlier corpora. Our experiments show that distillation-based approaches are the most effective at preserving downstream performance on earlier domains. These algorithms also improve knowledge transfer, allowing models to achieve better downstream performance on the latest data, and improve temporal generalization when a distribution gap exists between training and evaluation due to time. We believe our problem formulation, methods, and analysis will inspire future studies toward continual pretraining of language models.
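A minimal sketch of the distillation-based idea: while continuing to pretrain on the newest corpus, the current model is kept close to a frozen checkpoint from before the domain shift via a KL term on token distributions. The loss weighting, temperature, and function name below are illustrative assumptions.

```python
# Sketch of a distillation-regularized continual pretraining loss.
import torch
import torch.nn.functional as F

def continual_pretrain_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=2.0):
    # Standard language-modeling loss on the new corpus.
    lm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    # Distillation term against the frozen earlier checkpoint (the "teacher").
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    return alpha * lm + (1 - alpha) * kd

B, T, V = 2, 16, 1000
student = torch.randn(B, T, V, requires_grad=True)
teacher = torch.randn(B, T, V)                      # outputs of the frozen previous checkpoint
labels = torch.randint(0, V, (B, T))
loss = continual_pretrain_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```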
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
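A toy illustration of the token-level fusion described above: image tokens and point-cloud tokens, each augmented with an encoding derived from 3D coordinates, are concatenated into one memory that a set of object queries attends to for box prediction. Dimensions, head counts, and the position-encoding scheme are placeholders, not the released CMT implementation.

```python
# Sketch of cross-modal token fusion with a query-based transformer decoder.
import torch
import torch.nn as nn

class TinyCMT(nn.Module):
    def __init__(self, dim=256, num_queries=100):
        super().__init__()
        self.img_pos = nn.Linear(3, dim)        # encodes 3D coordinates tied to image tokens
        self.pts_pos = nn.Linear(3, dim)        # encodes 3D coordinates of point-cloud tokens
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(dim, 10)      # e.g., center, size, yaw, velocity
        self.cls_head = nn.Linear(dim, 10)

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        mem = torch.cat([img_tokens + self.img_pos(img_xyz),
                         pts_tokens + self.pts_pos(pts_xyz)], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        out = self.decoder(q, mem)              # object queries attend to the fused token memory
        return self.box_head(out), self.cls_head(out)

B, Ni, Np, D = 2, 300, 500, 256
boxes, cls = TinyCMT()(torch.randn(B, Ni, D), torch.randn(B, Ni, 3),
                       torch.randn(B, Np, D), torch.randn(B, Np, 3))
print(boxes.shape, cls.shape)  # torch.Size([2, 100, 10]) torch.Size([2, 100, 10])
```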
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention; a sketch of both steps follows this abstract. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
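The two "reference" steps can be sketched as follows: masked pooling of support features yields a dynamic class center used to re-weight query features (feature level), and query object queries attend to support object queries via cross-attention (instance level). Shapes and module choices here are assumptions for illustration only, not the RefT implementation.

```python
# Sketch of the feature-level re-weighting and instance-level query calibration.
import torch
import torch.nn as nn

class ReferenceTwiceSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, query_feat, support_feat, support_mask, query_obj_q, support_obj_q):
        # (1) Feature level: masked average pooling -> dynamic class center -> channel re-weighting.
        m = support_mask.unsqueeze(1)                                        # (B, 1, H, W)
        center = (support_feat * m).sum(dim=(2, 3)) / m.sum(dim=(2, 3)).clamp(min=1e-6)
        query_feat = query_feat * torch.sigmoid(center)[..., None, None]
        # (2) Instance level: query object queries attend to support object queries.
        query_obj_q, _ = self.attn(query_obj_q, support_obj_q, support_obj_q)
        return query_feat, query_obj_q

B, C, H, W, Q = 2, 256, 32, 32, 50
qf, qq = ReferenceTwiceSketch()(torch.randn(B, C, H, W), torch.randn(B, C, H, W),
                                torch.rand(B, H, W).round(), torch.randn(B, Q, C),
                                torch.randn(B, Q, C))
print(qf.shape, qq.shape)  # torch.Size([2, 256, 32, 32]) torch.Size([2, 50, 256])
```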
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
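For context on the problem setting, here is a generic sketch that combines soft-label distillation from a (possibly biased) teacher with a demographic-parity style penalty on the student's predictions. This is not RELIANT's actual debiasing mechanism, which is decoupled from specific model structures; it only illustrates what fairness-aware distillation can look like.

```python
# Generic sketch: distillation loss plus a demographic-parity gap penalty.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, lam=0.5, tau=2.0):
    # Soft-label distillation from the (possibly biased) teacher GNN.
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    # Demographic-parity gap: difference in mean positive-class probability across groups.
    p = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()
    return kd + lam * gap

N, C = 100, 2
loss = fair_kd_loss(torch.randn(N, C, requires_grad=True),   # student node logits
                    torch.randn(N, C),                        # teacher node logits
                    torch.randint(0, 2, (N,)))                # binary sensitive attribute per node
loss.backward()
print(float(loss))
```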
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
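A rough sketch of the iRMB idea: an inverted-residual shell (expand, mix, project, skip) whose inner operator combines a depthwise convolution for short-distance dependency with multi-head self-attention for long-distance interaction. The expansion ratio and attention placement are simplifications, not the exact EMO design.

```python
# Sketch of an inverted-residual block mixing depthwise convolution and self-attention.
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hid = dim * expand
        self.pw_in = nn.Conv2d(dim, hid, 1)                              # expand channels
        self.dw = nn.Conv2d(hid, hid, 3, padding=1, groups=hid)          # depthwise: local mixing
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)  # global mixing
        self.pw_out = nn.Conv2d(hid, dim, 1)                             # project back
        self.act = nn.SiLU()

    def forward(self, x):
        B, C, H, W = x.shape
        h = self.act(self.pw_in(x))
        h = self.act(self.dw(h))
        t = h.flatten(2).transpose(1, 2)                  # (B, H*W, hid) tokens for attention
        t, _ = self.attn(t, t, t)
        h = h + t.transpose(1, 2).reshape(B, -1, H, W)
        return x + self.pw_out(h)                         # inverted residual connection

y = iRMBSketch()(torch.randn(2, 64, 14, 14))
print(y.shape)  # torch.Size([2, 64, 14, 14])
```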